
    Unsupervised Classification of Intrusive Igneous Rock Thin Section Images using Edge Detection and Colour Analysis

    Classification of rocks is one of the fundamental tasks in a geological study. The process requires a human expert to examine sampled thin section images under a microscope. In this study, we propose a method that uses microscope automation, digital image acquisition, edge detection and colour (histogram) analysis. We collected 60 digital images from 20 standard thin sections using a digital camera mounted on a conventional microscope. Each image is partitioned into a finite number of cells that form a grid structure. The edge and colour profile of the pixels inside each cell determines its classification, and the individual cells then determine the classification of the thin section image via a majority voting scheme. Our method yielded precision as high as 90% to 100%.
    Comment: To appear in 2017 IEEE International Conference on Signal and Image Processing Applications
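
As a rough illustration of the grid-plus-majority-vote idea described above, the sketch below partitions a greyscale image into cells, labels each cell from a simple edge-density and colour cue, and takes a majority vote over the cells. The cell rule, thresholds, and the mafic/felsic labels are hypothetical stand-ins for illustration, not the paper's actual classes or features.

```python
import numpy as np

def classify_cell(cell, edge_thresh=0.1, dark_frac=0.5):
    # Hypothetical cell rule: edge density from a simple gradient magnitude,
    # combined with a colour-histogram cue (fraction of dark pixels).
    gy, gx = np.gradient(cell.astype(float))
    edge_density = (np.hypot(gx, gy) > 25).mean()
    dark = (cell < 128).mean()
    return "mafic" if (edge_density > edge_thresh and dark > dark_frac) else "felsic"

def classify_image(img, grid=(4, 4)):
    # Partition the image into a grid of cells, classify each cell,
    # then decide the image label by majority vote.
    h, w = img.shape
    ch, cw = h // grid[0], w // grid[1]
    votes = [classify_cell(img[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw])
             for i in range(grid[0]) for j in range(grid[1])]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]
```

The voting step makes the image-level decision robust to a few misclassified cells, which is presumably why per-cell precision translates into high image-level precision.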

    Unsupervised Segmentation of Action Segments in Egocentric Videos using Gaze

    Unsupervised segmentation of action segments in egocentric videos is a desirable feature in tasks such as activity recognition and content-based video retrieval. Reducing the search space to a finite set of action segments enables faster and less noisy matching. However, there exists a substantial gap in machine understanding of the natural temporal cuts that occur during a continuous human activity. This work reports a novel gaze-based approach for segmenting action segments in videos captured with an egocentric camera. Gaze is used to locate the region-of-interest inside a frame. By tracking two simple motion-based parameters inside successive regions-of-interest, we discover a finite set of temporal cuts. We present several results using combinations of the two parameters on BRISGAZE-ACTIONS, a dataset of egocentric videos depicting several daily-living activities. The quality of the temporal cuts is further improved by implementing two entropy measures.
    Comment: To appear in 2017 IEEE International Conference on Signal and Image Processing Applications
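
The idea of tracking simple motion-based parameters inside successive gaze-centred regions-of-interest can be sketched as follows. The two parameters here (appearance change inside the ROI and displacement of the gaze point), the ROI size, and the thresholds are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def temporal_cuts(frames, gaze, roi=8, motion_thresh=20.0, shift_thresh=10.0):
    """Propose candidate action boundaries from two simple motion-based
    parameters tracked inside successive gaze-centred regions-of-interest."""
    cuts = []
    for t in range(1, len(frames)):
        y, x = gaze[t]
        ys = slice(max(0, y - roi), y + roi)
        xs = slice(max(0, x - roi), x + roi)
        # Parameter 1 (assumed): appearance change inside the ROI.
        motion = np.abs(frames[t][ys, xs].astype(float)
                        - frames[t - 1][ys, xs].astype(float)).mean()
        # Parameter 2 (assumed): displacement of the gaze point itself.
        shift = np.hypot(gaze[t][0] - gaze[t - 1][0],
                         gaze[t][1] - gaze[t - 1][1])
        if motion > motion_thresh or shift > shift_thresh:
            cuts.append(t)
    return cuts
```

A frame index is flagged as a cut whenever either parameter jumps, yielding a finite set of temporal cuts that partitions the video into candidate action segments.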

    Preliminary Experiment Results of Left Ventricular Remodelling Prediction Using Machine Learning Algorithms

    Left ventricular remodelling involves changes in ventricular size, shape and function whose abnormalities eventually lead to heart failure. Early prediction of left ventricular remodelling can enhance clinical decision making in cardiac health management and reduce cardiovascular mortality. Although cardiac magnetic resonance imaging is increasingly used in the clinical assessment of cardiovascular diseases, there has been little study of predicting the presence of left ventricular remodelling from data derived from cardiac magnetic resonance images. Four parameters, namely left ventricular end diastolic volume, left ventricular end systolic volume, ejection fraction and the presence/absence of oedema, are used for prediction. In a preliminary experiment, a multi-layer perceptron and a support vector machine are trained with the parameters obtained from cardiac magnetic resonance images to classify patients as exhibiting left ventricular remodelling or normal. The preliminary results indicate that the support vector machine model performed better than the multi-layer perceptron.
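
A comparison of this kind can be sketched with scikit-learn. The synthetic data, feature distributions, and remodelling label rule below are invented purely for illustration and do not reflect the study's patient data or its reported results.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def compare_models(X, y):
    """Train an MLP and an SVM on the four parameters; return test accuracies."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
    models = {
        "mlp": make_pipeline(StandardScaler(),
                             MLPClassifier(hidden_layer_sizes=(16,),
                                           max_iter=2000, random_state=0)),
        "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    }
    return {name: m.fit(X_tr, y_tr).score(X_te, y_te)
            for name, m in models.items()}

# Synthetic stand-in for the four CMR-derived parameters:
# [end-diastolic volume, end-systolic volume, ejection fraction, oedema flag].
rng = np.random.default_rng(42)
edv = rng.normal(140, 30, 200)
esv = rng.normal(60, 20, 200)
ef = (edv - esv) / edv * 100
oedema = rng.integers(0, 2, 200)
X = np.column_stack([edv, esv, ef, oedema])
y = (ef < 55).astype(int)  # hypothetical remodelling label for the sketch
scores = compare_models(X, y)
```

Standardising the features before training matters here, since volumes, percentages, and a binary flag live on very different scales and both models are scale-sensitive.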

    Rendering Realistic Subject-Dependent Expression Images by Learning 3DMM Deformation Coefficients

    Automatic analysis of facial expressions is attracting increasing interest, thanks to the many potential applications it can enable. However, collecting expression-labeled images for large sets of images or videos is a complicated operation that, in most cases, requires substantial human intervention. In this paper, we propose a solution that, starting from a neutral image of a subject, produces a realistic expressive face image of the same subject. This is possible thanks to a particular 3D morphable model (3DMM) that can effectively and efficiently fit 2D images, and then deform itself under the action of deformation parameters learned expression-by-expression in a subject-independent manner. Ultimately, applying such deformation parameters to the neutral model of a subject allows the rendering of realistic expressive images of that subject. Experiments demonstrate that these deformation parameters can be learned from a small set of training data using simple statistical tools; despite this simplicity, very realistic subject-dependent expression renderings can be obtained. Furthermore, robustness in cross-dataset tests is also evidenced.
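
The core deformation step, applying expression coefficients learned in a subject-independent manner to a subject's fitted neutral model, can be read as a linear combination of deformation components added to the neutral mesh. The shapes and the linear form below are an illustrative reading of the 3DMM formulation, not necessarily the authors' exact parameterisation.

```python
import numpy as np

def apply_expression(neutral, components, coeffs):
    """Deform a neutral 3D face mesh with learned expression coefficients.

    neutral:    (V, 3) vertex positions of the subject's fitted neutral model
    components: (K, V, 3) deformation components learned expression-by-expression
    coeffs:     (K,) subject-independent weights for the target expression
    """
    # Weighted sum of deformation components, added to the neutral vertices.
    return neutral + np.tensordot(coeffs, components, axes=1)
```

Because the coefficients are learned per expression but applied to any subject's neutral model, the same small set of weights can drive expressive renderings across subjects.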